7 research outputs found

    Temporal Segmentation of Surgical Sub-tasks through Deep Learning with Multiple Data Sources

    Many tasks in robot-assisted surgeries (RAS) can be represented by finite-state machines (FSMs), where each state represents either an action (such as picking up a needle) or an observation (such as bleeding). A crucial step towards the automation of such surgical tasks is the temporal perception of the current surgical scene, which requires real-time estimation of the states in the FSMs. The objective of this work is to estimate the current state of the surgical task based on the actions performed or events that occur as the task progresses. We propose Fusion-KVE, a unified surgical state estimation model that incorporates multiple data sources, including Kinematics, Vision, and system Events. Additionally, we examine the strengths and weaknesses of different state estimation models in segmenting states with different representative features or levels of granularity. We evaluate our model on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), as well as a more complex dataset involving robotic intra-operative ultrasound (RIOUS) imaging, created using the da Vinci® Xi surgical system. Our model achieves a frame-wise state estimation accuracy of up to 89.4%, improving on state-of-the-art surgical state estimation models on both the JIGSAWS suturing dataset and our RIOUS dataset.
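    The FSM framing described in the abstract can be sketched minimally as follows. This is an illustration only, not code from the paper: the state names and transition table below are hypothetical stand-ins for the task-specific states (e.g. JIGSAWS suturing gestures) that the authors actually use.

```python
from enum import Enum, auto

# Hypothetical surgical sub-task states for illustration; the paper's
# actual states depend on the task being segmented.
class SutureState(Enum):
    REACH_NEEDLE = auto()
    PICK_UP_NEEDLE = auto()
    INSERT_NEEDLE = auto()
    PULL_SUTURE = auto()

# Allowed transitions of the finite-state machine: each state maps to the
# set of states the task may move to next.
TRANSITIONS = {
    SutureState.REACH_NEEDLE: {SutureState.PICK_UP_NEEDLE},
    SutureState.PICK_UP_NEEDLE: {SutureState.INSERT_NEEDLE, SutureState.REACH_NEEDLE},
    SutureState.INSERT_NEEDLE: {SutureState.PULL_SUTURE},
    SutureState.PULL_SUTURE: {SutureState.REACH_NEEDLE},
}

def is_valid_transition(current: SutureState, nxt: SutureState) -> bool:
    """Check whether the FSM permits moving from `current` to `nxt`."""
    return nxt in TRANSITIONS.get(current, set())
```

    A state estimator such as Fusion-KVE would, at each frame, predict which of these states is active; a transition table like this can additionally constrain predictions to physically plausible sequences.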

    Haptic Texture Rendering and Perception Using Coil Array Magnetic Levitation Haptic Interface: Effects of Torque Feedback and Probe Type on Roughness Perception

    M.S., University of Hawaii at Manoa, 2016. Includes bibliographical references. A novel maglev-based haptic platform was deployed to investigate the effects of torque feedback and stylus type on human roughness perception. For this purpose, two haptic probes, a fingertip probe and a penhandle probe, were 3D printed with one and four embedded magnets, respectively. Three torque renderings, namely No Torque, Slope Torque, and Stiff Torque, were developed in tandem with penetration-based force feedback to render simulated surfaces; the main difference between these conditions was the amount and type of active torque generated. A conventional magnitude-estimation experiment was performed for data gathering and analysis. The results showed strong effects of wavelength across all torque conditions and probe types. Participants rated surfaces as rougher under Slope Torque and with the fingertip probe than with the penhandle. These results reveal new means of torque-based surface generation that lead to higher roughness perception, and they also highlight the importance of probe type in human roughness perception.
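    Penetration-based force feedback of the kind mentioned in the abstract is commonly rendered as a spring force proportional to how far the probe tip sinks below the surface. The sketch below illustrates that general idea only; the function names, stiffness value, and sinusoidal texture profile are assumptions for illustration, not the thesis's actual rendering parameters.

```python
import math

def penetration_force(depth_m: float, stiffness_n_per_m: float = 500.0) -> float:
    """Spring-like penetration force in newtons: zero above the surface,
    stiffness * depth once the probe penetrates (depth > 0)."""
    return stiffness_n_per_m * max(depth_m, 0.0)

def textured_surface_height(x_m: float, amplitude_m: float = 0.001,
                            wavelength_m: float = 0.01) -> float:
    """Sinusoidal texture profile; the wavelength is the parameter the
    experiment varied when measuring perceived roughness."""
    return amplitude_m * math.sin(2.0 * math.pi * x_m / wavelength_m)
```

    Under this model, torque renderings such as Slope Torque would add a rotational component derived from the local surface slope on top of the translational penetration force.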